The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

  • Downloads: 7325
  • Format: EPUB/TXT/PDF/MOBI
  • Created: 2021-06-30
  • Updated: 2025-09-06
  • Status: finished
  • Author: Erik J. Larson
  • ISBN: 0674983513
  • Environment: PC/Android/iPhone/iPad/Kindle

Summary

"If you want to know about AI, read this book。。。it shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence。"--Peter Thiel



A cutting-edge AI researcher and tech entrepreneur debunks the fantasy that superintelligence is just a few clicks away--and argues that this myth is not just wrong, it's actively blocking innovation and distorting our ability to make the crucial next leap.

Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren't really on the path to developing intelligent machines. In fact, we don't even know where that path might be.

A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don't correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven't a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That's why Alexa can't understand what you are asking, and why AI can only take us so far.

Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know--our own.

Reviews

Simon Rutherford

Unexpectedly hilarious. Makes you think, too!

Loren Picard

The best, most level-headed, and honest take on where scientists are with AI. No talk of cosmic endowment, killer robots, or machines replacing humans as a species. Larson doesn't sidestep the narrow successes of AI; he explains them for what they are. He explains why computers can beat humans at games but can't understand an ambiguous sentence. In an ironic twist, you come away from the book somewhat let down that artificial general intelligence is nowhere in sight (there are no workable theories being explored), but the best outcome of reading this book is that you feel newly empowered as a human, with an intellect that can't be duplicated.

Bruce

This should be required reading for all human beings. The hyperbole surrounding the idea of Artificial Intelligence has become hysterical, and those who subscribe to the hysteria are the new "FLAT EARTHERS" of the 21st century. The fear surrounding Artificial Intelligence is puerile, paranoid ignorance of logic.

Eric Holloway

Important contrarian take. Lots of great tidbits, like how Winograd schemas have resisted even big data and deep learning. Also interesting: the abductive form of inference, and its neglect in AI research. A sobering look at the harm posed by the myth of AI's inevitability. Seems sinisterly similar to Marx's claim of communism's inevitability.

Ben Chugg

There is a prevailing dogma that achieving "artificial general intelligence" will require nothing more than bigger and better machine learning models. Add more layers, add more data, create better optimization algorithms, and voila: a system as general-purpose as humans but infinitely superior in its processing speed. Nobody quite knows exactly how this jump from narrow AI (good at a particular, very well-defined task) to general AI will happen, but that hasn't stopped many from building careers based on erroneous predictions, or prophesying that such a development spells the doom of the human race. The AI space is dominated by vague arguments and absolute certainty in the conclusions.

Onto the scene steps Erik Larson, an engineer who understands both how these systems work and their philosophical assumptions. Larson points out that all our machine learning models are built on induction: inferring general patterns from specific observations. We feed an algorithm 10,000 labelled pictures and it infers which relationships among the pixels are most likely to predict "cat". Some models are faster than others, more clever in their pattern recognition, and so on, but at bottom they're all doing the same thing: correlating datasets.

We know of only one system capable of universal intelligence: human brains. And humans don't learn by induction. We don't infer the general from the specific. Instead, we guess the general and use the specifics to refute our guesses. We use our creativity to conjecture aspects of the world (space-time is curved, Ryan is lying, my shoes are in my backpack), and use empirical observations to disabuse us of those ideas that are false. This is why humans are capable of developing general theories of the world. Induction implies that you can only know what you see (a philosophy called "empiricism"), but that's false: we've never seen the inside of a star, yet we develop theories which explain the phenomena. Charles Sanders Peirce called this method of guessing and checking "abduction", and we have no good theory for abduction. To have one, we would need to better understand human creativity, which plays a central role in knowledge creation. In other words, we need a philosophical and scientific revolution before we can possibly generate true artificial intelligence. As long as we keep relying on induction, machines will be forever constrained by the data they are fed.

Larson argues that the philosophical confusion over induction and the current focus on "big data" is infecting other areas of science. Many neuroscience departments have forgotten the role that theories play in advancing our knowledge, and are hoping that a true understanding of the human brain will emerge from simply mapping it more accurately. But this is hopeless. Even after developing an accurate map, what will you look for? There is no such thing as observation without theory.

At a time when it is fashionable to point out all the biases and "irrationalities" in human thinking, hopefully the book helps remind us of the amazing ability of humans to create general-purpose knowledge. Highly recommended read.